HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN MOBILE COMMUNICATION NETWORKS
In cellular networks, mobile stations (MSs) move from one cell region to another while expecting seamless communication. Handoff (or handover) is therefore an essential issue for seamless communication, and several approaches have been proposed for handoff performance analysis in mobile communication systems. In Code-Division Multiple-Access (CDMA) systems with soft handoff, mobile stations within a soft-handoff region (SR) use multiple radio channels and receive their signals from multiple base stations (BSs) simultaneously; consequently, SRs should be investigated for handoff analysis in CDMA systems. In this paper, a model for soft handoff in CDMA networks is developed by introducing an overlap region between adjacent cells, facilitating the derivation of a handoff manageability performance model. We employed an empirical modelling approach to support our analytical findings, and measured and investigated the performance characteristics of a typical communication network over a specific period, from March to June 2013, in an established cellular communication network operator in Nigeria. The observed data parameters were used as model predictors during the simulation phase. Simulation results revealed that increased system capacity degrades the performance of the network due to the congestion, call dropping, and call blocking the system is most likely to experience, but the rate of these factors can be minimized by properly considering the handoff probability levels. Comparing our results, we determined the most effective and efficient performance model and recommend it to network operators for enhanced Quality of Service (QoS), which will potentially improve the cost-value ratio for mobile users; this confirms that a Soft Handoff (SH) performance model should be carefully implemented to minimize cellular communication system defects.
Keywords: CDMA, QoS, optimization, handoff manageability, congestion, call blocking, call dropping
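The call-blocking behaviour discussed above can be illustrated with the classical Erlang-B formula, a standard teletraffic result (this is a generic sketch, not the paper's handoff model; the traffic and channel values are illustrative):

```python
def erlang_b(traffic_erlangs, channels):
    """Erlang-B blocking probability: the chance that an arriving
    (new or handoff) call finds all channels busy and is blocked.
    Computed with the standard numerically stable recurrence."""
    b = 1.0                                  # B(E, 0) = 1
    for k in range(1, channels + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return b

# Illustrative values: 10 Erlangs of offered traffic on a 12-channel cell.
blocking = erlang_b(10.0, 12)
```

Adding channels lowers the blocking probability while adding offered traffic raises it, which matches the congestion trend the abstract reports.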
The Illusion in the Presentation of the Rank of a Web Page with Dangling Links
The hyperlinked list of search results displayed by any given search
engine is an illusion of the true priority order if its positions are
based on a PageRank computation, because several factors govern the
PageRank computation. Besides the fact that different search engines
fashion their own ranking models, it is an established fact that the
hub and authority factors must be considered. This paper considers the
effects of in-bound (authority) and out-bound (hub) links on the rank
of a page. Hyperlink-Induced Topic Search (HITS)
(also known as hubs and authorities) is a link analysis algorithm that
rates Web pages, developed by Jon Kleinberg (1999). It was a precursor
to PageRank. The idea behind Hubs and Authorities stemmed from a
particular insight into the creation of web pages when the Internet was
originally forming; that is, certain web pages, known as hubs, served
as large directories that were not actually authoritative in the
information they held, but were used as compilations of a broad
catalog of information that led users directly to other authoritative
pages. In other words, a good hub represented a page that pointed to
many other pages, and a good authority represented a page that was
linked by many different hubs. A good hub page is one that points to
many good authorities; a good authority page is one that is pointed to
by many good hub pages. We focused on Google's Toolbar with regard to
pages given a certain toolbar PageRank but having an inbound link from
a page whose toolbar PageRank is higher by one. This is done to
alleviate the effect of removing pages with no outbound links from the
database(s), as proposed by Page and Brin and applied by Google for
the normalization of dangling links. We considered six (6) Web-site
clip scenarios to show the effects of inbound and outbound links,
vis-a-vis the number of pages in the Web and the influence of the
damping factor. We observed that linking sets of dangling sites/pages
(PDF- and MS-Word-infested pages) and applying a new ranking model to
them better smooths the complementary hub and authority effects in
page ranking. We gave
recommendations on how search engines and crawlers could weight pages
and produce a better ranking for users, so that users can reach their
expected results within the first few returned.
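The dangling-link problem above can be sketched with a power-iteration PageRank in which a dangling page's rank mass is spread uniformly rather than the page being removed from the database (a generic sketch of the standard technique, not the paper's proposed ranking model; the example graph is illustrative):

```python
import numpy as np

def pagerank(links, d=0.85, tol=1e-9, max_iter=200):
    """Power-iteration PageRank. `links[i]` lists the pages that page i
    links to; a page with no out-links is dangling, and its rank mass
    is redistributed uniformly instead of being dropped."""
    n = len(links)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = np.zeros(n)
        dangling_mass = 0.0
        for page, outs in enumerate(links):
            if outs:                         # share rank along out-links
                share = rank[page] / len(outs)
                for target in outs:
                    new[target] += share
            else:                            # dangling page (e.g. a PDF)
                dangling_mass += rank[page]
        new += dangling_mass / n             # uniform redistribution
        new = (1 - d) / n + d * new          # damping factor d
        if np.abs(new - rank).sum() < tol:
            rank = new
            break
        rank = new
    return rank

# A 4-page web where page 3 is dangling (no outbound links).
ranks = pagerank([[1, 2], [2], [0, 3], []])
```

Because the dangling mass is reinjected, the ranks remain a proper probability distribution even with PDF/MS-Word-style dangling pages in the graph.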
A Comparative Analysis of Structured and Object-Oriented Programming Methods
The concepts of structured and object-oriented programming are not
new, but both approaches remain very useful and relevant in today's
programming paradigms. In this paper, we distinguish the features of
structured programs from those of object-oriented programs. Structured
programming is a method of organizing and coding programs that
provides easy understanding and modification, whereas object-oriented
programming (OOP) builds a program from a set of objects, which can
vary dynamically and which execute by acting and reacting to each
other, in much the same way that a real-world process proceeds (the
interaction of real-world objects). An object-oriented approach makes
programs more intuitive to design, faster to develop, more amenable to
modification, and easier to understand. With traditional
procedural/structured programming, a program describes a series of
steps to be performed (an algorithm). In the object-oriented view of
programming, instead of programs consisting of sets of data loosely
coupled to many different procedures, object-oriented programs consist
of software modules called objects that encapsulate both data and
processing while hiding their inner complexities from programmers and
hence from other objects.
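The contrast described above can be sketched with a hypothetical account example (the names `deposit` and `Account` are illustrative, not from the paper):

```python
# Structured style: data and the procedures acting on it are separate;
# the caller owns and passes the data around.
def deposit(balance, amount):
    return balance + amount

# Object-oriented style: the object encapsulates the balance (data)
# together with the operations on it, hiding the inner state.
class Account:
    def __init__(self, balance=0):
        self._balance = balance      # internal detail, hidden from callers

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

structured_balance = deposit(100, 50)   # caller manages the data itself
acct = Account(100)
acct.deposit(50)                        # the object manages its own data
```

Both fragments compute the same result; the difference is where the data lives and who is allowed to touch it.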
Qualities of Grid Computing That Can Last For Ages
Grid computing has emerged as an important new field, distinguished
from conventional distributed computing by its focus on large-scale
resource sharing and services. It will become even more popular
because of the benefits it offers over traditional supercomputers and
other forms of distributed computing. This paper examines these
benefits and discusses why grid computing will continue to enjoy
greater popularity and patronage for many years to come. Finally, we
discuss the virtual organization (VO) as one of the key
characteristics of Grid computing.
On an Improved Fuzzy C-Means Clustering Algorithm
A cluster is a collection of similar objects that exhibit
dissimilarity to the objects of other clusters. Clustering algorithms
may be classified as exclusive, overlapping, hierarchical, and
probabilistic, and several algorithms have been formulated for
classification and found useful in different areas of application.
K-means, Fuzzy C-means, hierarchical clustering, and mixtures of
Gaussians are the most prominent among them. Our interest in this work
is in web search engines. In this paper, we examine the Fuzzy C-means
clustering algorithm with a view to improving its area of application.
On the Web, classification of page content is essential to focused
crawling, which supports the development of web directories,
topic-specific web link analysis, and analysis of the topical
structure of the Web. Web page classification can also help improve
the quality of web search. Page classification is the process of
assigning a page to one or more predefined category labels. A web page
may well exhibit the qualities of two or more clusters, so exclusive
clustering would not be very useful in our case; hence the need for
overlapping clustering using Fuzzy C-means. It is worth noting that
Fuzzy C-means, being an optimization problem, converges to a local
minimum or a saddle point, and in some cases the iteration becomes
recurrent. At such a point one would assume the saddle point has been
reached; if the iteration is not terminated, the loop may continue
consuming stack space and degrade the algorithm (increased running
time, etc.). In this work, we developed a modified Fuzzy C-means
clustering algorithm with a sharp stopping condition, tested it on
demo data to ascertain its convergence, and comparatively tested its
efficiency. The Corel Q-pro optimizer was used with a timing macro.
Our results are quite interesting and challenging, as they clearly
show the presence of overlapping documents along the spectrum of two
different clusters.
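The need for a sharp stopping condition can be sketched with a standard Fuzzy C-means loop that terminates as soon as the membership matrix stops changing, so the iteration cannot recur indefinitely at a saddle point (a generic FCM implementation, not the authors' modified algorithm; the `eps` threshold and demo data are illustrative):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, eps=1e-5, max_iter=100):
    """Fuzzy C-means with an explicit stopping condition: stop when
    the largest change in any membership value falls below eps."""
    rng = np.random.default_rng(0)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m                          # fuzzified memberships
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        dist = np.fmax(dist, 1e-12)          # guard against division by zero
        inv = dist ** (-2.0 / (m - 1))
        U_new = inv / inv.sum(axis=0)        # updated membership matrix
        if np.abs(U_new - U).max() < eps:    # sharp stopping condition
            U = U_new
            break
        U = U_new
    return centers, U

# Demo data: two well-separated groups of points in the plane.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]])
centers, U = fuzzy_c_means(X, c=2)
```

Unlike exclusive clustering, each column of `U` gives a point's degree of membership in every cluster, which is what allows a page to belong partially to two clusters at once.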